This study is part of a larger research project aimed at developing and implementing an NLP-enabled AI feedback tool called PyrEval to support middle school students' science explanation writing. We explored how human-AI integrated classrooms can invite students to harness AI tools while remaining agentic learners. Building on the theory of new materialism with posthumanist perspectives, we examined teacher framing to see how the nature of PyrEval was communicated, thereby orienting students to partner with, or rely on, PyrEval. We analyzed one teacher's talk in multiple classrooms as well as that of students in small groups. We found that student agency was fostered through teacher framing of (a) PyrEval as a non-neutral actor and a co-investigator and (b) students' participation as authors, and their understanding of the nature of PyrEval, as the core task and purpose. Findings and implications are discussed.
Free, publicly accessible full text available July 9, 2026.
-
Automated feedback can provide students with timely information about their writing, but students' willingness to engage meaningfully with the feedback to revise their writing may be influenced by their perceptions of its usefulness. We explored the factors that may have influenced 339 eighth-grade students' perceptions of receiving automated feedback on their writing and whether their perceptions affected their revisions and writing improvement. Using HLM and logistic regression analyses, we found that (1) students with more positive perceptions of the automated feedback made revisions that resulted in significant improvements in their writing, and (2) students who received feedback indicating they had included more important ideas in their essays had significantly higher perceptions of the usefulness of the feedback but were significantly less likely to engage in substantive revisions. Implications, and the importance of helping students evaluate and reflect on the feedback to make substantive revisions regardless of their initial feedback, are discussed.
Free, publicly accessible full text available June 9, 2026.
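As a concrete illustration of the analyses named above, the sketch below shows what an HLM (random-intercept mixed model) and a logistic regression could look like in Python with statsmodels. The file name, column names, and model specifications are hypothetical assumptions for illustration, not the study's actual variables or code.

```python
# Hypothetical sketch of the HLM and logistic regression analyses described
# above; all file, column, and grouping names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_feedback.csv")  # hypothetical: one row per student

# HLM: writing improvement predicted by feedback perception, with students
# nested in classrooms (random intercept per classroom).
hlm = smf.mixedlm("writing_gain ~ perception_score",
                  data=df, groups=df["classroom_id"]).fit()
print(hlm.summary())

# Logistic regression: odds of making a substantive revision as a function
# of how many important ideas the feedback credited to the essay.
logit = smf.logit("substantive_revision ~ ideas_credited", data=df).fit()
print(np.exp(logit.params["ideas_credited"]))  # odds ratio per extra idea
```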
-
As use of artificial intelligence (AI) has increased, concerns about AI bias and discrimination have been growing. This paper discusses an application called PyrEval, in which natural language processing (NLP) was used to automate assessment and provide feedback on middle school science writing without linguistic discrimination. Linguistic discrimination in this study was operationalized as unfair assessment of scientific essays based on writing features that are not considered normative, such as subject-verb disagreement. Such unfair assessment is especially problematic when the purpose of assessment is not assessing English writing but rather assessing the content of scientific explanations. PyrEval was implemented in middle school science classrooms. Students explained their roller coaster designs by stating relationships among science concepts such as potential energy, kinetic energy, and the law of conservation of energy. Initial and revised versions of scientific essays written by 307 eighth-grade students were analyzed. Our comparison of manual and NLP assessments showed that PyrEval did not penalize student essays that contained non-normative writing features. Repeated measures ANOVAs and GLMM analyses revealed that essay quality significantly improved from initial to revised essays after students received the NLP feedback, regardless of non-normative writing features. Findings and implications are discussed.
Free, publicly accessible full text available May 25, 2026.
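For readers less familiar with the statistics, the sketch below shows a repeated measures ANOVA of the kind reported above, in Python with statsmodels; the file and column names are hypothetical assumptions, and the GLMM analysis is omitted for brevity.

```python
# Hypothetical sketch of a repeated measures ANOVA comparing initial and
# revised essay scores; file and column names are illustrative assumptions.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: two rows per student, one per draft ("initial", "revised").
df = pd.read_csv("essay_scores.csv")  # hypothetical file

aov = AnovaRM(df, depvar="essay_score", subject="student_id",
              within=["draft"]).fit()
print(aov)  # F test for the initial-to-revised change in essay quality
```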
-
With an increasing focus in STEM education on critical thinking skills, science writing plays an ever more important role. A recently published dataset of two sets of college-level lab reports from an inquiry-based physics curriculum relies on analytic assessment rubrics with multiple dimensions, specifying subject matter knowledge and general components of good explanations. Each analytic dimension is assessed on a 6-point scale to provide detailed feedback that can help students improve their science writing skills. Manual assessment can be slow and difficult to calibrate for consistency across all students in large-enrollment courses with many sections. While much work exists on automated assessment of open-ended questions in STEM subjects, there has been far less work on long-form writing such as lab reports. We present an end-to-end neural architecture with separate verifier and assessment modules, inspired by approaches to Open Domain Question Answering (OpenQA). VerAs first verifies whether a report contains any content relevant to a given rubric dimension and, if so, assesses the relevant sentences. On the lab reports, VerAs outperforms multiple baselines based on OpenQA systems or Automated Essay Scoring (AES). VerAs also performs well on an analytic rubric for middle school physics essays.
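To make the verify-then-assess idea concrete, below is a minimal PyTorch sketch of a two-module pipeline in the spirit of VerAs; the embedding sizes, relevance threshold, module designs, and all names are illustrative assumptions, not the published architecture.

```python
# Minimal sketch of a verify-then-assess pipeline in the spirit of VerAs;
# dimensions, threshold, and module designs are illustrative assumptions.
import torch
import torch.nn as nn

class Verifier(nn.Module):
    """Scores whether a sentence is relevant to a given rubric dimension."""
    def __init__(self, dim=128):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                    nn.Linear(dim, 1))

    def forward(self, sent_embs, rubric_emb):
        # Pair each sentence embedding with the rubric embedding and
        # return one relevance logit per sentence.
        pairs = torch.cat([sent_embs, rubric_emb.expand_as(sent_embs)], dim=-1)
        return self.scorer(pairs).squeeze(-1)

class Assessor(nn.Module):
    """Maps the relevant sentences to one of six analytic score levels."""
    def __init__(self, dim=128, n_levels=6):
        super().__init__()
        self.head = nn.Linear(dim, n_levels)

    def forward(self, relevant_embs):
        return self.head(relevant_embs.mean(dim=0))  # pool, then classify

def assess_report(sent_embs, rubric_emb, verifier, assessor, threshold=0.0):
    mask = verifier(sent_embs, rubric_emb) > threshold
    if not mask.any():
        return None  # report has no content relevant to this dimension
    return assessor(sent_embs[mask]).argmax().item()

# Usage with random stand-ins for sentence and rubric embeddings.
level = assess_report(torch.randn(12, 128), torch.randn(128),
                      Verifier(), Assessor())
print(level)  # predicted score level (0-5), or None if nothing was relevant
```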
